38 research outputs found

    MetaSpace II: Object and full-body tracking for interaction and navigation in social VR

    MetaSpace II (MS2) is a social Virtual Reality (VR) system where multiple users can not only see and hear but also interact with each other, grasp and manipulate objects, walk around in space, and get tactile feedback. MS2 allows walking in physical space by tracking each user's skeleton in real-time and lets users feel by employing passive haptics, i.e., when users touch or manipulate an object in the virtual world, they simultaneously touch or manipulate a corresponding object in the physical world. To enable these elements in VR, MS2 creates a correspondence in spatial layout and object placement by building the virtual world on top of a 3D scan of the real world. Through the association between the real and virtual world, users are able to walk freely while wearing a head-mounted device, avoid obstacles like walls and furniture, and interact with people and objects. Most current virtual reality (VR) environments are designed for a single-user experience where interactions with virtual objects are mediated by hand-held input devices or hand gestures. Additionally, users are only shown a representation of their hands in VR, floating in front of the camera as seen from a first-person perspective. We believe that representing each user as a full-body avatar controlled by the natural movements of the person in the real world (see Figure 1d) can greatly enhance believability and a user's sense of immersion in VR.
    Comment: 10 pages, 9 figures. Video: http://living.media.mit.edu/projects/metaspace-ii
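    The real-to-virtual correspondence the abstract describes can be pictured as a calibration transform that maps tracked skeleton joints from the physical room's coordinate frame into the virtual world built on the 3D scan. The sketch below is illustrative only (the paper does not publish this code); the function names and the rigid-transform calibration are assumptions.

    ```python
    import numpy as np

    def make_rigid_transform(yaw_rad, translation):
        """Build a 4x4 homogeneous transform: a rotation about the vertical
        (y) axis followed by a translation -- one plausible calibration
        between a tracked physical room and its 3D-scanned virtual copy."""
        c, s = np.cos(yaw_rad), np.sin(yaw_rad)
        T = np.eye(4)
        T[:3, :3] = np.array([[c, 0.0, s],
                              [0.0, 1.0, 0.0],
                              [-s, 0.0, c]])
        T[:3, 3] = translation
        return T

    def to_virtual(joints_real, T):
        """Map an (N, 3) array of tracked joint positions (meters, room
        frame) into the virtual-world frame using calibration T."""
        homo = np.hstack([joints_real, np.ones((len(joints_real), 1))])
        return (homo @ T.T)[:, :3]

    # Identity calibration: the virtual space laid directly over the scan,
    # so a tracked skeleton appears at the same coordinates in VR.
    T = make_rigid_transform(0.0, np.array([0.0, 0.0, 0.0]))
    skeleton = np.array([[0.5, 1.7, 2.0],   # head
                         [0.5, 1.0, 2.0]])  # hip
    print(to_virtual(skeleton, T))
    ```

    Because the same transform applies to walls and furniture in the scan, a user who avoids a virtual obstacle also avoids the physical one, which is what makes the passive haptics work.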

    Design Strategies for Playful Technologies to Support Light-intensity Physical Activity in the Workplace

    Moderate to vigorous intensity physical activity has an established preventative role in obesity, cardiovascular disease, and diabetes. However, recent evidence suggests that sitting time affects health negatively independent of whether adults meet prescribed physical activity guidelines. Since many of us spend long hours daily sitting in front of a host of electronic screens, this is cause for concern. In this paper, we describe a set of three prototype digital games created to encourage light-intensity physical activity during short breaks at work. The design of these kinds of games is a complex process that must consider motivation strategies, interaction methodology, usability, and ludic aspects. We present design guidelines for technologies that encourage physical activity in the workplace, derived from a user evaluation using the prototypes. Although the design guidelines can be seen as general principles, we conclude that they have to be considered differently for different workplace cultures and workspaces. Our study was conducted with users who have some experience playing casual games on their mobile devices and were able and willing to increase their physical activity.
    Comment: 11 pages, 5 figures. Video: http://living.media.mit.edu/projects/see-saw

    Expanding social mobile games beyond the device screen

    Emerging pervasive games use sensors, graphics, and networking technologies to provide immersive game experiences integrated with the real world. Existing pervasive games commonly rely on a device screen for providing game-related information, while overlooking opportunities to include new types of contextual interactions like jumping, a punching gesture, or even voice as game inputs. We present the design of Spellbound, a physical mobile team-based game, to help contribute to our understanding of how we can design pervasive games that aim to nurture a spirit of togetherness. We also briefly touch upon how togetherness and playfulness can transform physical movement into a desirable activity in the user evaluation section. Spellbound is an outdoor pervasive team-based physical game. It takes advantage of the above-mentioned opportunities and integrates real-world actions like jumping and spinning with a virtual world. It also replaces touch-based input with voice interaction and provides glanceable and haptic feedback using custom hardware, in the true spirit of social play characteristic of traditional children’s games. We believe Spellbound is a form of digital outdoor gaming that anchors enjoyment on physical action, social interaction, and tangible feedback. Spellbound was well received in user evaluation playtests, which confirmed that the main design objective of enhancing a sense of togetherness was largely met.

    SPOTR: Spatio-temporal Pose Transformers for Human Motion Prediction

    3D human motion prediction is a research area of high significance and a challenge in computer vision. It is useful for the design of many applications including robotics and autonomous driving. Traditionally, autoregressive models have been used to predict human motion. However, these models have high computation needs and suffer from error accumulation, which makes them difficult to use in real-time applications. In this paper, we present a non-autoregressive model for human motion prediction. We focus on learning spatio-temporal representations non-autoregressively for the generation of plausible future motions. We propose a novel architecture that leverages the recently proposed Transformers. Human motion involves complex spatio-temporal dynamics, with joints affecting each other's position and rotation even when they are not directly connected. The proposed model extracts these dynamics using both convolutions and the self-attention mechanism. Using specialized spatial and temporal self-attention to augment the features extracted through convolution allows our model to generate spatio-temporally coherent predictions in parallel, independent of the activity. Our contributions are threefold: (i) we frame human motion prediction as a sequence-to-sequence problem and propose a non-autoregressive Transformer to forecast a sequence of poses in parallel; (ii) our method is activity agnostic; (iii) we show that despite its simplicity, our approach is able to make accurate predictions, achieving better or comparable results compared to the state-of-the-art on two public datasets, with far fewer parameters and much faster inference.
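    The key idea of the abstract, emitting all future frames at once rather than one step at a time, can be sketched with plain temporal self-attention. The code below is a minimal toy stand-in, not the paper's architecture: the seeding strategy and function names are assumptions, and the convolutional and spatial-attention branches are omitted.

    ```python
    import numpy as np

    def softmax(x, axis=-1):
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    def self_attention(X):
        """Scaled dot-product self-attention over a (frames, features)
        sequence; every output frame attends to every frame at once,
        which is what lets a non-autoregressive decoder refine the
        whole future sequence in parallel."""
        d = X.shape[-1]
        scores = X @ X.T / np.sqrt(d)      # (frames, frames) affinities
        return softmax(scores, axis=-1) @ X

    def predict_parallel(past_poses, horizon):
        """Toy non-autoregressive forecaster: initialise all `horizon`
        future frames from the last observed pose, then refine them
        jointly with one pass of temporal self-attention."""
        seed = np.repeat(past_poses[-1:], horizon, axis=0)
        return self_attention(np.vstack([past_poses, seed]))[-horizon:]

    rng = np.random.default_rng(0)
    past = rng.normal(size=(10, 6))        # 10 frames, 6 joint features
    future = predict_parallel(past, horizon=4)
    print(future.shape)                    # (4, 6)
    ```

    The contrast with an autoregressive model is that no predicted frame is fed back as input, so there is no per-step error accumulation and all frames can be computed in one forward pass.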

    LMExplainer: a Knowledge-Enhanced Explainer for Language Models

    Large language models (LMs) such as GPT-4 are very powerful and can handle many kinds of natural language processing (NLP) tasks. However, it can be difficult to interpret their results due to the multi-layer nonlinear model structure and billions of parameters. Lack of understanding of how the model works can make it unreliable and dangerous for everyday users in real-world scenarios. Most recent works exploit attention weights to provide explanations for model predictions. However, pure attention-based explanation is unable to support the growing complexity of the models and cannot reason about their decision-making processes. Thus, we propose LMExplainer, a knowledge-enhanced interpretation module for language models that can provide human-understandable explanations. We use a knowledge graph (KG) and a graph attention neural network to extract the key decision signals of the LM. We further explore whether interpretation can also help AI understand the task better. Our experimental results show that LMExplainer outperforms existing LM+KG methods on CommonsenseQA and OpenBookQA. We also compare the explanation results with generated explanation methods and human-annotated results. The comparison shows our method can provide more comprehensive and clearer explanations. LMExplainer demonstrates the potential to enhance model performance and furnish explanations for the reasoning processes of models in natural language.
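    The "extract the key decision signals with a graph attention network" step can be illustrated with a much-simplified attention scoring over KG node embeddings. Everything here is a hypothetical sketch, not the LMExplainer implementation: the dot-product scoring, the toy embeddings, and the function name are all assumptions.

    ```python
    import numpy as np

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    def attention_over_neighbors(query_vec, node_vecs, node_names, top_k=2):
        """Graph-attention-style scoring: score each KG node against the
        LM's query representation and return the highest-weighted nodes
        as candidate 'key decision signals' for the explanation."""
        scores = node_vecs @ query_vec              # dot-product affinity
        weights = softmax(scores)                   # normalized attention
        order = np.argsort(weights)[::-1][:top_k]
        return [(node_names[i], float(weights[i])) for i in order]

    # Hypothetical toy embeddings for a CommonsenseQA-style question
    # about finance; the 2-d vectors are made up for illustration.
    names = ["bank", "river", "money", "teller"]
    vecs = np.array([[1.0, 0.2], [0.1, 0.9], [0.9, 0.1], [0.8, 0.3]])
    query = np.array([1.0, 0.0])                    # "finance" direction
    print(attention_over_neighbors(query, vecs, names))
    ```

    The returned high-attention nodes are what a module like this would surface to a human as the graph evidence behind the model's answer.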

    XplainLLM: A QA Explanation Dataset for Understanding LLM Decision-Making

    Large Language Models (LLMs) have recently made impressive strides in natural language understanding tasks. Despite their remarkable performance, understanding their decision-making process remains a big challenge. In this paper, we look into bringing some transparency to this process by introducing a new explanation dataset for question answering (QA) tasks that integrates knowledge graphs (KGs) in a novel way. Our dataset includes 12,102 question-answer-explanation (QAE) triples. Each explanation in the dataset links the LLM's reasoning to entities and relations in the KGs. The explanation component includes a why-choose explanation, a why-not-choose explanation, and a set of reason-elements that underlie the LLM's decision. We leverage KGs and graph attention networks (GAT) to find the reason-elements and transform them into why-choose and why-not-choose explanations that are comprehensible to humans. Through quantitative and qualitative evaluations, we demonstrate the potential of our dataset to improve the in-context learning of LLMs and enhance their interpretability and explainability. Our work contributes to the field of explainable AI by enabling a deeper understanding of the LLM's decision-making process, making it more transparent and thereby potentially more reliable to researchers and practitioners alike. Our dataset is available at: https://github.com/chen-zichen/XplainLLM_dataset.git
    Comment: 17 pages, 6 figures, 7 tables. Our dataset is available at: https://github.com/chen-zichen/XplainLLM_dataset.git
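    The QAE triple structure the abstract describes (answer, why-choose, why-not-choose, reason-elements) can be modelled as a small record type. The field names below are illustrative guesses at the shape, not the dataset's actual schema; consult the linked repository for the real format.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class QAEExample:
        """One question-answer-explanation triple, shaped after the
        components named in the abstract (hypothetical field names)."""
        question: str
        choices: list
        answer: str
        why_choose: str          # why the chosen answer is right
        why_not_choose: str      # why the alternatives were rejected
        reason_elements: list = field(default_factory=list)  # KG entities/relations

    # A made-up example in that shape.
    ex = QAEExample(
        question="Where would you keep savings long-term?",
        choices=["bank", "pocket", "river"],
        answer="bank",
        why_choose="A bank is an institution designed to hold money securely.",
        why_not_choose="A pocket is insecure; a river cannot hold money.",
        reason_elements=["bank", "money", "security"],
    )
    print(ex.answer)
    ```

    Grounding the `reason_elements` in KG entities is what lets each free-text explanation be traced back to discrete graph evidence.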

    Speculating on biodesign in the future home

    The home is a place of shelter, a place for family, and for separation from other parts of life, such as work. Global challenges, the most pressing of which are currently the COVID-19 pandemic and climate change, have forced extra roles into many homes and will continue to do so in the future. Biodesign integrates living organisms into designed solutions and can offer opportunities for new kinds of technologies to facilitate a transition to the home of the future. Many families have had to learn to work alongside each other, and technology has mediated a transition from standard models of operation for industries. These are the challenges of the 21st century that mandate careful thinking around interactive systems and innovations that support new ways of living and working at home. In this workshop, we will explore opportunities for biodesign interactive systems in the future home. We will bring together a broad group of researchers in HCI, design, and biosciences to build the biodesign community and discuss speculative design futures. The outcome will generate an understanding of the role of interactive biodesign systems at home, as a place with extended functionalities.

    Activity-based outdoor mobile multiplayer game

    Thesis: S.M., Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2013. This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections. Cataloged from the student-submitted PDF version of the thesis (119 pages). Includes bibliographical references (pages 109-118).
    Traditional outdoor recreation is physically and emotionally rewarding through goal-directed social activities and encourages a connection with the real world, but can be logistically difficult. Online gaming allows people to play together despite physical distances and differences in time zones; players enjoy new experiences in awe-inspiring interactive worlds while remaining effectively inactive. This project is a physically active outdoor social game that embeds a layer of fantasy and challenge in the real world, employing location-based technologies available on mobile phones. Requiring the game to be multiplayer in real-time and played in a physical space presents certain limitations in the design of input and output mechanics. This project demonstrates how those constraints were managed to create a compelling experience. An evaluation with sixteen participants validates the concept, while observations and feedback suggest future improvements.
    by Misha Sra. S.M.